Deep Perceptual Mapping for Thermal to Visible Face Recognition
Cross-modal face matching between the thermal and visible spectrum is a much
desired capability for night-time surveillance and security applications. Due
to the very large modality gap, thermal-to-visible face recognition is one of
the most challenging face matching problems. In this paper, we present an
approach that bridges this modality gap by a significant margin. Our approach
captures the highly non-linear relationship between the two modalities by
using a deep neural network. Our model attempts to learn a non-linear mapping
from the visible to the thermal spectrum while preserving the identity
information. We show a substantive performance improvement on a difficult
thermal-visible face dataset. The presented approach improves the
state-of-the-art by more than 10% in terms of Rank-1 identification and
bridges the drop in performance due to the modality gap by more than 40%.
Comment: BMVC 2015 (oral)
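The core idea of the abstract — learning a non-linear mapping between the two spectra while preserving identity — can be sketched as follows. This is an illustrative toy, not the paper's model: the single hidden layer, the feature dimensions, the random paired features, and the plain L2 objective are all assumptions; the paper trains a deeper network on real face features.

```python
import numpy as np

# Toy sketch: learn a non-linear mapping that brings visible-spectrum
# features close to thermal features of the same identity.
rng = np.random.default_rng(0)
d_vis, d_hid, d_th, n = 64, 128, 64, 200

# Paired features: row i of both matrices belongs to the same identity.
X_vis = rng.standard_normal((n, d_vis))
X_th = rng.standard_normal((n, d_th))

W1 = rng.standard_normal((d_vis, d_hid)) * 0.1
W2 = rng.standard_normal((d_hid, d_th)) * 0.1

def forward(X, W1, W2):
    H = np.maximum(X @ W1, 0.0)   # ReLU hidden layer
    return H, H @ W2              # mapped (visible -> thermal) features

def mse(Y):
    return float(np.mean((Y - X_th) ** 2))

loss_before = mse(forward(X_vis, W1, W2)[1])

lr = 1e-2
for _ in range(300):
    H, Y = forward(X_vis, W1, W2)
    err = (Y - X_th) / n          # gradient of the identity-preserving L2 loss
    grad_W2 = H.T @ err
    grad_H = err @ W2.T
    grad_H[H <= 0.0] = 0.0        # backpropagate through the ReLU
    W1 -= lr * (X_vis.T @ grad_H)
    W2 -= lr * grad_W2

loss_after = mse(forward(X_vis, W1, W2)[1])
```

After training, the mapped visible features sit closer to their thermal counterparts, so a standard nearest-neighbour matcher can be applied across the modality gap.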
A Pose-Sensitive Embedding for Person Re-Identification with Expanded Cross Neighborhood Re-Ranking
Person re-identification is a challenging retrieval task that requires
matching a person's acquired image across non-overlapping camera views. In this
paper we propose an effective approach that incorporates both the fine and
coarse pose information of the person to learn a discriminative embedding. In
contrast to the recent direction of explicitly modeling body parts or
correcting for misalignment based on these, we show that a rather
straightforward inclusion of acquired camera view and/or the detected joint
locations into a convolutional neural network helps to learn a very effective
representation. To increase retrieval performance, re-ranking techniques based
on computed distances have recently gained much attention. We propose a new
unsupervised and automatic re-ranking framework that achieves state-of-the-art
re-ranking performance. We show that in contrast to the current
state-of-the-art re-ranking methods, our approach does not require computing
new rank lists for each image pair (e.g., based on reciprocal neighbors) and
performs well using simple direct rank-list comparison, or even just the
already computed Euclidean distances between the images. We show that
both our learned representation and our re-ranking method achieve
state-of-the-art performance on a number of challenging surveillance image and
video datasets.
The code is available online at:
https://github.com/pse-ecn/pose-sensitive-embedding
Comment: CVPR 2018: v2 (fixes, added new results on PRW dataset)
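The claim that re-ranking can work directly on already computed Euclidean distances can be illustrated with a small sketch. Note this is a simplified approximation of neighborhood-based re-ranking, not the paper's exact expanded cross neighborhood formulation; the aggregation rule and the parameter `k` are assumptions for illustration.

```python
import numpy as np

def cross_neighborhood_rerank(dist, k=4):
    """Re-rank using only a precomputed distance matrix: the new distance
    between i and j aggregates the original distances from i to j's k
    nearest neighbors and from j to i's (a simplified illustration)."""
    nbrs = np.argsort(dist, axis=1)[:, 1:k + 1]  # k nearest, self excluded
    n = dist.shape[0]
    new = np.empty_like(dist)
    for i in range(n):
        for j in range(n):
            new[i, j] = dist[i, nbrs[j]].sum() + dist[j, nbrs[i]].sum()
    return new

# Toy usage on random features (a real system would use learned embeddings).
rng = np.random.default_rng(1)
feats = rng.standard_normal((8, 16))
diff = feats[:, None, :] - feats[None, :, :]
dist = np.sqrt((diff ** 2).sum(-1))
reranked = cross_neighborhood_rerank(dist, k=3)
```

Because the refined distance only re-reads entries of the original matrix, no new pairwise rank lists need to be computed, which is the practical advantage the abstract highlights.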
Deep View-Sensitive Pedestrian Attribute Inference in an end-to-end Model
Pedestrian attribute inference is a demanding problem in visual surveillance
that can facilitate person retrieval, search and indexing. To exploit semantic
relations between attributes, recent research treats it as a multi-label image
classification task. The visual cues hinting at attributes can be strongly
localized, and the inference of person attributes such as hair, backpack,
shorts, etc., is highly dependent on the acquired view of the pedestrian. In
this paper we model this dependence in an end-to-end learning framework and show
that a view-sensitive attribute inference is able to learn better attribute
predictions. Our proposed model jointly predicts the coarse pose (view) of the
pedestrian and learns specialized view-specific multi-label attribute
predictions. We show in an extensive evaluation on three challenging datasets
(PETA, RAP and WIDER) that our proposed end-to-end view-aware attribute
prediction model provides competitive performance and improves on the published
state-of-the-art on these datasets.
Comment: accepted BMVC 201
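The fusion step described in the abstract — jointly predicting a coarse view and combining view-specific multi-label attribute heads — can be sketched as a posterior-weighted mixture. The shapes, the soft weighting, and the function name are illustrative assumptions, not the paper's exact architecture.

```python
import numpy as np

def view_weighted_attributes(view_probs, per_view_logits):
    """Fuse view-specific multi-label attribute predictions with the
    predicted coarse-view posterior (illustrative sketch).
    view_probs: (n_views,) softmax output of a view classifier.
    per_view_logits: (n_views, n_attrs), one attribute head per view."""
    logits = view_probs @ per_view_logits      # posterior-weighted logits
    return 1.0 / (1.0 + np.exp(-logits))       # sigmoid: multi-label probs

# Toy usage: 3 coarse views (e.g., front/back/side), 5 attributes.
view_probs = np.array([0.7, 0.2, 0.1])
per_view_logits = np.array([
    [2.0, -1.0, 0.5, 0.0, -2.0],
    [1.0, 1.0, -0.5, 0.2, -1.0],
    [0.0, 0.5, 0.0, -0.3, 0.0],
])
attr_probs = view_weighted_attributes(view_probs, per_view_logits)
```

Soft weighting by the view posterior lets the attribute heads specialize per view while the whole model remains differentiable end to end.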